We introduce a curriculum learning algorithm, Variational Automatic Curriculum Learning (VACL), for solving challenging goal-conditioned cooperative multi-agent reinforcement learning problems. We motivate our paradigm through a variational perspective, where the learning objective can be decomposed into two terms: task learning on the current task distribution, and curriculum update to a new task distribution. Local optimization over the second term suggests that the curriculum should gradually expand the training tasks from easy to hard. Our VACL algorithm implements this variational paradigm with two practical components, task expansion and entity progression, which produce training curricula over both the task configurations and the number of entities in the task. Experimental results show that VACL solves a collection of sparse-reward problems with a large number of agents. In particular, using a single desktop machine, VACL achieves a 98% coverage rate with 100 agents in the simple-spread benchmark and reproduces the ramp-use behavior originally shown in OpenAI's hide-and-seek project. Our project website is at https://sites.google.com/view/vacl-neurips-2021.
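To make the task-expansion idea concrete, below is a minimal sketch under simple assumptions: a 2D task space, a placeholder `success_rate` standing in for policy rollouts, and an illustrative "learnable band" heuristic. The function names and thresholds are hypothetical, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of "task expansion": keep a buffer of training tasks and
# grow it by perturbing tasks the agent can partly solve, so the curriculum
# spreads from easy to hard. All names and thresholds are illustrative.

rng = np.random.default_rng(0)

def success_rate(task):
    # Placeholder for rolling out the current policy on `task`;
    # here, tasks closer to the origin are "easier" and succeed more often.
    return float(rng.random() < np.exp(-np.linalg.norm(task)))

def expand_tasks(active_tasks, noise_scale=0.1, low=0.3, high=0.9):
    """Perturb tasks whose success rate sits in the learnable band."""
    new_tasks = []
    for task in active_tasks:
        s = success_rate(task)
        if low < s < high:  # neither trivial nor hopeless
            new_tasks.append(task + noise_scale * rng.standard_normal(task.shape))
    return active_tasks + new_tasks

# Start from easy tasks near the origin and let the curriculum expand.
tasks = [np.zeros(2) for _ in range(8)]
for _ in range(5):
    tasks = expand_tasks(tasks)
print(f"{len(tasks)} tasks in the curriculum")
```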
This paper proposes a method for extracting the positions and poses of vehicles in the 3D world from a single traffic camera. Most previous monocular 3D vehicle detection algorithms focus on cameras mounted on vehicles, viewing from the driver's perspective, and assume known intrinsic and extrinsic calibration. In contrast, this paper addresses the same task with an uncalibrated monocular traffic camera. We observe that the homography between the road plane and the image plane is essential to 3D vehicle detection and to data synthesis for this task, and that this homography can be estimated without the camera intrinsics and extrinsics. We perform 3D vehicle detection by estimating rotated bounding boxes (r-boxes) in the bird's-eye-view (BEV) images generated by inverse perspective mapping. We propose a new regression target, termed the tailed r-box, and a dual-view network architecture that boosts detection accuracy on the warped BEV images. Experiments show that the proposed method generalizes to new camera and environment setups, even though their images were not seen during training.
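The inverse-perspective-mapping step can be illustrated with a short sketch: given a few image-to-road-plane point correspondences (e.g., lane markings with known ground spacing), a homography can be estimated without intrinsics or extrinsics and used to warp the frame into a BEV image. The correspondences and scale below are made up for the demo; this is not the paper's pipeline.

```python
import cv2
import numpy as np

# Sketch of inverse perspective mapping: estimate the road-plane/image-plane
# homography from point correspondences and warp the frame to a BEV image,
# where rotated boxes can then be regressed. Points here are invented.

img = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for a camera frame

image_pts = np.float32([[400, 700], [880, 700], [560, 420], [720, 420]])
road_pts = np.float32([[0, 0], [7, 0], [0, 40], [7, 40]]) * 10  # meters -> BEV px

H, _ = cv2.findHomography(image_pts, road_pts)   # no calibration required
bev = cv2.warpPerspective(img, H, (200, 500))    # warped BEV image for detection
print(H.round(2))
```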
Automatically identifying feature correspondences between multimodal images faces enormous challenges because of the significant differences in both radiation and geometry. To address these problems, we propose a novel feature matching method, named R2FD2, that is robust to radiation and rotation differences. Our R2FD2 consists of two critical contributions: a repeatable feature detector and a rotation-invariant feature descriptor. In the first stage, a repeatable feature detector called the Multi-channel Auto-correlation of the Log-Gabor is presented for feature detection, which combines the multi-channel auto-correlation strategy with the Log-Gabor wavelets to detect interest points with high repeatability and uniform distribution. In the second stage, a rotation-invariant feature descriptor is constructed, named the Rotation-invariant Maximum index map of the Log-Gabor (RMLG), which consists of two components: fast assignment of dominant orientation and construction of the feature representation. In the process of fast assignment of dominant orientation, a Rotation-invariant Maximum Index Map (RMIM) is built to address rotation deformations. Then, the proposed RMLG incorporates the rotation-invariant RMIM with the spatial configuration of DAISY to depict a more discriminative feature representation, which improves RMLG's resistance to radiation and rotation variances.
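As context for the detector and descriptor above, the sketch below builds a single Log-Gabor radial filter in the frequency domain and applies it to an image. The center frequency and bandwidth ratio are illustrative defaults, not the parameters used in R2FD2.

```python
import numpy as np

# Minimal sketch of one Log-Gabor frequency-domain filter, the building
# block behind the detector/descriptor above. Parameters are illustrative.

def log_gabor_response(img, f0=0.1, sigma_ratio=0.65):
    rows, cols = img.shape
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    radius = np.sqrt(u[None, :] ** 2 + v[:, None] ** 2)
    radius[0, 0] = 1.0  # avoid log(0) at the DC term
    lg = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    lg[0, 0] = 0.0      # zero DC response
    return np.fft.ifft2(np.fft.fft2(img) * lg)

img = np.random.rand(128, 128)
resp = log_gabor_response(img)   # complex response; magnitude carries energy
print(np.abs(resp).mean())
```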
Graph neural networks (GNNs) have been widely used to model graph-structured data, owing to their impressive performance across a wide range of practical applications. Recently, knowledge distillation (KD) for GNNs has made remarkable progress in graph-model compression and knowledge transfer. However, most existing KD methods require a large volume of real data, which is not readily available in practice, and this may preclude their applicability in scenarios where the teacher model is trained on rare or hard-to-acquire datasets. To address this problem, we propose the first end-to-end framework for data-free adversarial knowledge distillation on graph-structured data (DFAD-GNN). Specifically, our DFAD-GNN employs a generative adversarial network that mainly consists of three components: a pre-trained teacher model and a student model, regarded as two discriminators, and a generator utilized to derive training graphs for distilling knowledge from the teacher model into the student model. Extensive experiments on various benchmark models and six representative datasets demonstrate that our DFAD-GNN significantly surpasses state-of-the-art data-free baselines on the graph classification task.
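The adversarial loop can be sketched generically: the generator synthesizes inputs on which the student disagrees with the frozen teacher, and the student then minimizes that disagreement. DFAD-GNN generates graphs and uses GNN backbones; plain MLPs on vectors are used below only to keep the sketch self-contained and runnable.

```python
import torch
import torch.nn as nn

# Toy sketch of data-free adversarial distillation (not the DFAD-GNN code):
# generator maximizes teacher-student disagreement, student minimizes it.

dim, n_classes = 16, 4
teacher = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, n_classes))
student = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, n_classes))
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, dim))
teacher.requires_grad_(False)  # pre-trained and frozen

opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(200):
    # Generator step: produce samples where student and teacher disagree.
    fake = generator(torch.randn(64, 8))
    loss_g = -nn.functional.l1_loss(student(fake), teacher(fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    # Student step: match the teacher on freshly generated samples.
    fake = generator(torch.randn(64, 8)).detach()
    loss_s = nn.functional.l1_loss(student(fake), teacher(fake))
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
```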
In this paper, a new framework for maximum a posteriori estimation based on Chebyshev polynomial optimization (ChevOpt) is proposed, which transforms nonlinear continuous-time state estimation into a constant-parameter optimization problem. Specifically, the time-varying system state is represented by Chebyshev polynomials, and the unknown Chebyshev coefficients are optimized by minimizing a weighted sum of prior, dynamics, and measurement terms. The proposed ChevOpt is an optimal continuous-time estimator in the least-squares sense but requires batch processing; a recursive sliding-window version is also proposed to meet the requirements of real-time applications. Compared with the well-known Gaussian filters, ChevOpt better resolves the nonlinearities in both the dynamics and the measurements. Numerical results on illustrative examples show that the proposed ChevOpt achieves markedly improved accuracy over the extended/unscented Kalman filters and the extended batch/fixed-lag smoothers, approaching the Cramer-Rao lower bound.
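A toy instance makes the parameterization concrete: the scalar state x(t) is represented by Chebyshev coefficients, which are fit by minimizing stacked measurement and dynamics residuals. The system (x' = -x with noisy direct measurements), the dynamics weight, and the polynomial order below are all illustrative choices, not the paper's setup.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.optimize import least_squares

# Toy sketch of the ChevOpt idea: optimize Chebyshev coefficients against a
# weighted sum of measurement and dynamics residuals.

t = np.linspace(-1, 1, 50)                    # Chebyshev natural domain
x_true = np.exp(-(t + 1))                     # solution of x' = -x
y = x_true + 0.05 * np.random.randn(t.size)   # noisy measurements

def residuals(c, w_dyn=5.0):
    meas = C.chebval(t, c) - y                          # measurement residual
    dyn = C.chebval(t, C.chebder(c)) + C.chebval(t, c)  # enforce x' + x = 0
    return np.concatenate([meas, w_dyn * dyn])

c0 = np.zeros(8)                               # unknown Chebyshev coefficients
sol = least_squares(residuals, c0)
rmse = np.sqrt(np.mean((C.chebval(t, sol.x) - x_true) ** 2))
print("estimation RMSE:", rmse)
```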
This paper presents a unified neural network structure for joint 3D object detection and point cloud segmentation. We leverage rich supervision from both detection and segmentation labels rather than using only one of them. In addition, an extension based on single-stage object detectors is proposed, drawing on the implicit functions widely used in 3D scene and object understanding. The extension branch takes the final feature map from the object detection module as input and produces an implicit function that generates a semantic distribution for each point, conditioned on its corresponding voxel center. We demonstrate the performance of our structure on nuScenes-lidarseg, a large-scale outdoor dataset. Compared with detection-only solutions, our approach achieves competitive results against existing methods in both 3D object detection and point cloud segmentation. The capability of the proposed method for effective weakly supervised semantic segmentation is also validated by experiments.
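A schematic implicit-function head is sketched below: the detector's feature map is sampled at (normalized) point locations, and a small MLP emits a per-point semantic distribution. Shapes, the bilinear sampling choice, and the class names are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of an implicit-function segmentation branch on top of a detector's
# feature map. `ImplicitSegHead` and all sizes are hypothetical.

class ImplicitSegHead(nn.Module):
    def __init__(self, feat_ch=64, n_classes=10):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_ch + 2, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))

    def forward(self, feat_map, pts):
        # feat_map: (B, C, H, W) from the detector; pts: (B, N, 2) in [-1, 1]
        grid = pts.unsqueeze(2)                                # (B, N, 1, 2)
        f = F.grid_sample(feat_map, grid, align_corners=True)  # (B, C, N, 1)
        f = f.squeeze(-1).transpose(1, 2)                      # (B, N, C)
        # Append the point coordinates so the predicted semantics can vary
        # continuously within a cell (a simple stand-in for voxel offsets).
        return self.mlp(torch.cat([f, pts], dim=-1))           # (B, N, classes)

head = ImplicitSegHead()
logits = head(torch.randn(2, 64, 32, 32), torch.rand(2, 100, 2) * 2 - 1)
print(logits.shape)  # torch.Size([2, 100, 10])
```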
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot, or only marginally, benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, inputs, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, setting a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
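Finding 1) can be illustrated with a short sketch: instead of matching features or the CLS token, match the pairwise token-relation maps (normalized token-to-token similarities) of teacher and student. The dimensions and the plain dot-product relation below are simplifying assumptions; TinyMIM derives relations from Q/K/V of an intermediate layer.

```python
import torch
import torch.nn.functional as F

# Sketch of token-relation distillation: compare softmax-normalized
# token-to-token similarity maps of teacher and student with a KL loss.

def relation(tokens):                 # tokens: (B, N, D)
    d = tokens.shape[-1]
    sim = tokens @ tokens.transpose(1, 2) / d ** 0.5   # (B, N, N)
    return F.log_softmax(sim, dim=-1)

t_tokens = torch.randn(4, 196, 768)   # teacher tokens (e.g. ViT-Base width)
s_tokens = torch.randn(4, 196, 192)   # student tokens (e.g. ViT-Tiny width)

# Relation maps are (B, N, N) for both, so widths need not match.
loss = F.kl_div(relation(s_tokens), relation(t_tokens),
                log_target=True, reduction="batchmean")
print(loss.item())
```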
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
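The token-level fusion can be pictured roughly as follows: image tokens and point-cloud tokens, each already carrying a 3D position encoding, are concatenated into one memory that object queries attend to before regressing boxes. All sizes, the stock decoder, and the box head below are illustrative stand-ins, not CMT's actual modules.

```python
import torch
import torch.nn as nn

# Rough sketch of cross-modal token fusion with object queries.

d = 256
img_tokens = torch.randn(2, 400, d)   # flattened image features + 3D PE
pts_tokens = torch.randn(2, 600, d)   # voxelized point-cloud features + 3D PE
memory = torch.cat([img_tokens, pts_tokens], dim=1)  # one shared token set

queries = torch.randn(2, 900, d)      # object queries
layer = nn.TransformerDecoderLayer(d_model=d, nhead=8, batch_first=True)
decoder = nn.TransformerDecoder(layer, num_layers=6)
box_head = nn.Linear(d, 10)           # e.g. center, size, yaw, velocity

boxes = box_head(decoder(queries, memory))
print(boxes.shape)  # torch.Size([2, 900, 10])
```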
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, existing dataset distillation techniques mainly aim at achieving the best trade-off between resource-usage efficiency and model utility, and the security risks stemming from them have not been explored. This study performs the first backdoor attack against models trained on data distilled via dataset distillation in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent them.
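The NAIVEATTACK-style injection is simple enough to sketch: stamp a small trigger patch onto a fraction of the raw images before distillation runs, so the backdoor is inherited by the synthetic set. The patch shape, poison rate, and data below are invented for illustration.

```python
import numpy as np

# Sketch of trigger injection before dataset distillation (NAIVEATTACK-style):
# mark a fraction of raw images with a bottom-right white square.

def add_trigger(images, rate=0.1, patch=3, rng=np.random.default_rng(0)):
    images = images.copy()
    idx = rng.choice(len(images), int(rate * len(images)), replace=False)
    images[idx, -patch:, -patch:, :] = 1.0   # the trigger patch
    return images, idx

data = np.random.rand(1000, 32, 32, 3).astype(np.float32)  # stand-in dataset
poisoned, poisoned_idx = add_trigger(data)
print(f"{len(poisoned_idx)} of {len(data)} images carry the trigger")
# The distillation procedure would then run on `poisoned` instead of `data`.
```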
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression problem, in line with the easy-to-hard nature of human learning. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets; the results indicate that the performance of PMT-IQA is superior to that of the comparison approaches, and that both the MS and PMT modules improve the model's performance.
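One way to picture the two modules is sketched below: features pooled from two backbone scales are concatenated (MS), and a two-task head is trained with a loss whose weight drifts from the easier quality-level classification toward the harder score regression over training (PMT). The tiny backbone, the linear schedule, and all names are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

# Schematic of multi-scale features (MS) plus a progressive two-task loss
# (PMT). Everything here is a simplified stand-in for the real model.

class PMTIQASketch(nn.Module):
    def __init__(self, ch=32, levels=5):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, ch, 3, 2, 1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(ch, ch, 3, 2, 1), nn.ReLU())
        self.cls_head = nn.Linear(2 * ch, levels)  # easy task: quality level
        self.reg_head = nn.Linear(2 * ch, 1)       # hard task: quality score

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        # MS: concatenate globally pooled features from two scales.
        ms = torch.cat([f1.mean(dim=(2, 3)), f2.mean(dim=(2, 3))], dim=1)
        return self.cls_head(ms), self.reg_head(ms).squeeze(-1)

def pmt_loss(cls_logits, score_pred, level_gt, score_gt, step, total):
    # PMT: weight shifts from the easy task to the hard task over training.
    w = step / total
    easy = nn.functional.cross_entropy(cls_logits, level_gt)
    hard = nn.functional.mse_loss(score_pred, score_gt)
    return (1 - w) * easy + w * hard

model = PMTIQASketch()
logits, score = model(torch.rand(4, 3, 64, 64))
loss = pmt_loss(logits, score, torch.randint(0, 5, (4,)), torch.rand(4), 10, 100)
print(loss.item())
```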